The last few years have seen significant advances in 3D learning for classification, detection, and segmentation problems. The vast majority of existing studies focus on canonical closed-set conditions, neglecting the intrinsic open nature of the real world. This limits the abilities of autonomous systems that need to manage novel and unknown signals. In this context, exploiting 3D data can be a valuable asset, since it conveys rich information about the geometry of sensed objects and scenes. This paper provides the first broad study on open-set 3D learning. We introduce a novel testbed with settings of increasing difficulty in terms of category semantic shift, covering both in-domain (synthetic-to-synthetic) and cross-domain (synthetic-to-real) scenarios. Moreover, we investigate the related out-of-distribution and open-set 2D literature to understand if and how its most recent approaches are effective on 3D data. Our extensive benchmark positions several algorithms in the same coherent picture, revealing their strengths and limitations. The results of our analysis may serve as a reliable foothold for future tailored open-set 3D models.
Semantic novelty detection aims at discovering unknown categories in the test data. This task is particularly relevant in safety-critical applications, such as autonomous driving or healthcare, where it is crucial to recognize unknown objects at deployment time and raise a warning to the user accordingly. Despite the impressive advancements of deep learning research, existing models still need a fine-tuning stage on the known categories in order to recognize the unknown ones. This could be prohibitive when privacy rules limit data access, or under strict memory and computational constraints (e.g., edge computing). We claim that a tailored representation learning strategy may be the right solution for effective and efficient semantic novelty detection. Besides extensively testing state-of-the-art approaches for this task, we propose a novel representation learning paradigm based on relational reasoning. It focuses on learning how to measure semantic similarity rather than on recognizing known categories. Our experiments show that this knowledge is directly transferable to a wide range of scenarios, and that it can be exploited as a plug-and-play module to convert closed-set recognition models into reliable open-set ones.
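The plug-and-play open-set conversion described above can be illustrated with a minimal sketch: embeddings are compared against known-class prototypes with a learned similarity measure, and a sample is rejected as unknown when no prototype is similar enough. The prototype dictionary, threshold value, and function names below are illustrative assumptions, not the paper's actual relational-reasoning architecture.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def open_set_predict(query, prototypes, threshold=0.5):
    """Compare a query embedding against known-class prototypes.

    Returns the most similar known class, or -1 ("unknown") when no
    prototype is similar enough -- the plug-and-play rejection idea.
    """
    scores = {c: cosine_similarity(query, p) for c, p in prototypes.items()}
    best_class = max(scores, key=scores.get)
    return best_class if scores[best_class] >= threshold else -1

# Toy prototypes for two known classes in a 2-D feature space.
prototypes = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}
print(open_set_predict(np.array([0.9, 0.1]), prototypes))    # close to class 0
print(open_set_predict(np.array([-1.0, -1.0]), prototypes))  # rejected: -1
```

In a real system, the similarity measure itself would be learned (here it is fixed to cosine similarity), which is precisely where the paper's relational-reasoning objective would come in.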
Physics simulators have shown great promise for conveniently learning reinforcement learning policies in safe, unconstrained environments. However, transferring the acquired knowledge to the real world can be challenging due to the reality gap. To this end, several methods have recently been proposed to automatically tune simulator parameters with posterior distributions given real data, for use with domain randomization at training time. These approaches have been shown to work for various robotic tasks under different settings and assumptions. Nevertheless, the existing literature lacks a thorough comparison of adaptive domain randomization methods with respect to transfer performance and real-data efficiency. In this work, we present an open benchmark for both offline and online methods (SimOpt, BayRn, DROID, DROPO), to shed light on which are most suitable for each setting and task at hand. We found that online methods are limited by the quality of the currently learned policy for the next iteration, while offline methods may sometimes fail when replaying trajectories in simulation with open-loop commands. The code used will be released at https://github.com/gabrieletiboni/adr-benchmark.
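The core loop shared by the adaptive methods above is to fit a distribution over simulator parameters so that simulated trajectories match real ones. Below is a toy cross-entropy-style sketch of that idea on a one-parameter point-mass simulator; it is a simplified illustration under assumed dynamics, not an implementation of SimOpt, BayRn, DROID, or DROPO.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(mass, steps=20):
    """Toy open-loop simulator: position of a unit-force point mass."""
    acc = 1.0 / mass
    t = np.arange(steps)
    return 0.5 * acc * t**2

real_traj = simulate(mass=2.0)  # stands in for real-world trajectory data

# Search over a Gaussian distribution of the dynamics parameter,
# repeatedly refitting it to the best-matching sampled parameters.
mu, sigma = 5.0, 2.0
for _ in range(30):
    samples = np.abs(rng.normal(mu, sigma, size=64)) + 1e-3
    errors = [np.mean((simulate(m) - real_traj) ** 2) for m in samples]
    elite = samples[np.argsort(errors)[:8]]  # keep best-matching params
    mu, sigma = elite.mean(), elite.std() + 1e-3

print(round(mu, 2))  # the posterior mean should approach the true mass of 2.0
```

The resulting distribution (mu, sigma) would then be used for domain randomization when training the policy; the offline-method failure mode mentioned above corresponds to the open-loop replay in `simulate` diverging from the real system.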
The ability to evolve is fundamental for any valuable autonomous agent whose knowledge cannot remain limited to that injected by the manufacturer. Consider, for example, a home assistant robot: it should be able to incrementally learn new object categories when asked, but also to recognize the same objects in different environments (rooms) and poses (hand-held / on the floor / above furniture), while rejecting unknown ones. Despite its importance, this scenario has only recently started to attract interest in the robotics community, and the related research is still in its infancy, with existing experimental testbeds but no tailored methods. With this work, we propose the first learning approach that deals with all the previously mentioned challenges at once by exploiting a single contrastive objective. We show how it learns a feature space well suited for incrementally including new classes, and is able to capture knowledge that generalizes across a variety of visual domains. Our method is endowed with a tailored, effective stopping criterion for each learning episode, and exploits a self-paced thresholding strategy that provides the classifier with a reliable rejection option. Both of these novel contributions are based on the observation of the data statistics and do not require manual tuning. An extensive experimental analysis confirms the effectiveness of the proposed approach in establishing a new state of the art. The code is available at https://github.com/francescocappio/contrastive_open_world.
Human adaptability relies crucially on the ability to learn and merge knowledge both from supervised and unsupervised learning: the parents point out a few important concepts, but then the children fill in the gaps on their own. This is particularly effective, because supervised learning can never be exhaustive, and thus learning autonomously allows one to discover invariances and regularities that help to generalize. In this paper we propose to apply a similar approach to the task of object recognition across domains: our model learns the semantic labels in a supervised fashion, and broadens its understanding of the data by learning from self-supervised signals how to solve a jigsaw puzzle on the same images. This secondary task helps the network to learn the concepts of spatial correlation while acting as a regularizer for the classification task. Multiple experiments on the PACS, VLCS, Office-Home and digits datasets confirm our intuition and show that this simple method outperforms previous domain generalization and adaptation solutions. An ablation study further illustrates the inner workings of our approach.
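The jigsaw-puzzle pretext task above boils down to shuffling an image's patches by a known permutation and asking the network to predict which permutation was applied. Here is a minimal sketch of the data-preparation step on a numpy array standing in for an image; the grid size and function name are illustrative, and the network itself is omitted.

```python
import numpy as np

def make_jigsaw(image, permutation, grid=3):
    """Shuffle a square image's grid x grid patches according to
    `permutation`; the permutation index becomes the self-supervised label."""
    h = image.shape[0] // grid
    patches = [image[r*h:(r+1)*h, c*h:(c+1)*h]
               for r in range(grid) for c in range(grid)]
    shuffled = [patches[i] for i in permutation]
    rows = [np.hstack(shuffled[r*grid:(r+1)*grid]) for r in range(grid)]
    return np.vstack(rows)

rng = np.random.default_rng(0)
image = np.arange(36).reshape(6, 6)   # toy 6x6 "image"
perm = rng.permutation(9)
puzzle = make_jigsaw(image, perm)

# Applying the inverse permutation restores the original image.
inverse = np.argsort(perm)
restored = make_jigsaw(puzzle, inverse)
print(np.array_equal(restored, image))  # True
```

In training, the classification head would receive the unshuffled image while an auxiliary head predicts the permutation index of the shuffled one, with both losses backpropagated through a shared backbone.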
Self-training has been shown to be helpful in addressing data scarcity for many domains, including vision, speech, and language. Specifically, self-training, or pseudo-labeling, labels unsupervised data and adds that to the training pool. In this work, we investigate and use pseudo-labeling for a recently proposed novel setup: joint transcription and translation of speech, which suffers from an absence of sufficient data resources. We show that under such data-deficient circumstances, the unlabeled data can significantly vary in domain from the supervised data, which results in pseudo-label quality degradation. We investigate two categories of remedies that require no additional supervision and target the domain mismatch: pseudo-label filtering and data augmentation. We show that such pseudo-label analysis and processing results in additional gains on top of the vanilla pseudo-labeling setup, for total improvements of up to 0.6% absolute WER and 2.2 BLEU points.
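The simplest form of the pseudo-label filtering remedy mentioned above is a confidence cutoff: keep only the machine-generated labels the model is sure about, since low-confidence labels tend to come from out-of-domain audio. The threshold value, tuple layout, and function name below are illustrative assumptions, not the paper's exact filtering criterion.

```python
def filter_pseudo_labels(candidates, min_confidence=0.9):
    """Keep only pseudo-labeled examples whose model confidence clears a
    threshold; low-confidence labels are dropped from the training pool."""
    return [(x, y) for x, y, conf in candidates if conf >= min_confidence]

# (utterance id, pseudo-label, model confidence) triples.
candidates = [
    ("utt1", "hello world", 0.97),
    ("utt2", "noisy guess", 0.42),   # likely out-of-domain, filtered out
    ("utt3", "good morning", 0.93),
]
kept = filter_pseudo_labels(candidates)
print(len(kept))  # 2
```

The surviving pairs are then mixed with the supervised data for the next training round, which is where the data-augmentation remedy would also be applied.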
Simulating rigid collisions among arbitrary shapes is notoriously difficult due to complex geometry and the strong non-linearity of the interactions. While graph neural network (GNN)-based models are effective at learning to simulate complex physical dynamics, such as fluids, cloth and articulated bodies, they have been less effective and efficient on rigid-body physics, except with very simple shapes. Existing methods that model collisions through the meshes' nodes are often inaccurate because they struggle when collisions occur on faces far from nodes. Alternative approaches that represent the geometry densely with many particles are prohibitively expensive for complex shapes. Here we introduce the Face Interaction Graph Network (FIGNet) which extends beyond GNN-based methods, and computes interactions between mesh faces, rather than nodes. Compared to learned node- and particle-based methods, FIGNet is around 4x more accurate in simulating complex shape interactions, while also 8x more computationally efficient on sparse, rigid meshes. Moreover, FIGNet can learn frictional dynamics directly from real-world data, and can be more accurate than analytical solvers given modest amounts of training data. FIGNet represents a key step forward in one of the few remaining physical domains which have seen little competition from learned simulators, and offers allied fields such as robotics, graphics and mechanical design a new tool for simulation and model-based planning.
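The key structural change in FIGNet is to build interaction edges between mesh faces rather than mesh nodes. A heavily simplified sketch of that edge construction is shown below, using face-centroid distance as the proximity test; FIGNet's actual face-face features are richer, so treat the radius test and function name as illustrative assumptions.

```python
import numpy as np

def face_interaction_edges(verts_a, faces_a, verts_b, faces_b, radius=0.5):
    """Connect pairs of triangle faces from two meshes whose centroids lie
    within `radius` -- face-level (rather than node-level) collision edges."""
    cent_a = verts_a[faces_a].mean(axis=1)  # (Fa, 3) face centroids
    cent_b = verts_b[faces_b].mean(axis=1)  # (Fb, 3)
    dists = np.linalg.norm(cent_a[:, None, :] - cent_b[None, :, :], axis=-1)
    ia, ib = np.nonzero(dists < radius)
    return list(zip(ia.tolist(), ib.tolist()))

# Two single-triangle "meshes", the second slightly shifted from the first.
verts_a = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces_a = np.array([[0, 1, 2]])
verts_b = verts_a + np.array([0.2, 0.2, 0.0])
faces_b = np.array([[0, 1, 2]])
print(face_interaction_edges(verts_a, faces_a, verts_b, faces_b))  # [(0, 0)]
```

Because a large flat face is represented by a single element here, a collision landing mid-face no longer falls between distant nodes, which is the failure mode of node-based collision models that the abstract describes.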
Continuous pseudo-labeling (PL) algorithms such as slimIPL have recently emerged as a powerful strategy for semi-supervised learning in speech recognition. In contrast with earlier strategies that alternated between training a model and generating pseudo-labels (PLs) with it, here PLs are generated in an end-to-end manner as training proceeds, improving training speed and the accuracy of the final model. PL shares a common theme with teacher-student models such as distillation, in that a teacher model generates targets that need to be mimicked by the student model being trained. However, interestingly, PL strategies in general use hard-labels, whereas distillation uses the distribution over labels as the target to mimic. Inspired by distillation, we expect that specifying the whole distribution (aka soft-labels) over sequences as the target for unlabeled data, instead of a single best-pass pseudo-labeled transcript (hard-labels), should improve PL performance and convergence. Surprisingly, we find that soft-label targets can lead to training divergence, with the model collapsing to a degenerate token distribution per frame. We hypothesize that the reason this does not happen with hard-labels is that training loss on hard-labels imposes sequence-level consistency that keeps the model from collapsing to the degenerate solution. In this paper, we show several experiments that support this hypothesis, and experiment with several regularization approaches that can ameliorate the degenerate collapse when using soft-labels. These approaches can bring the accuracy of soft-labels closer to that of hard-labels, and while they are unable to outperform them yet, they serve as a useful framework for further improvements.
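The hard- versus soft-label distinction above can be made concrete with frame-wise target distributions. The toy numpy sketch below shows hard targets as one-hots of the teacher's argmax, a "collapsed" student that emits the same token distribution every frame, and an entropy term of the kind a regularizer might use; the numbers are invented and the real losses in this setting are sequence-level (e.g., CTC-based), so this is only an intuition aid.

```python
import numpy as np

def cross_entropy(pred, target):
    """Mean frame-wise cross-entropy between distributions."""
    return float(-np.sum(target * np.log(pred + 1e-12), axis=-1).mean())

def entropy(dist):
    """Mean frame-wise entropy of a distribution."""
    return float(-np.sum(dist * np.log(dist + 1e-12), axis=-1).mean())

# Teacher distributions over 3 tokens for 2 frames (the soft targets).
soft = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]])
# Hard targets: one-hot of the teacher's argmax per frame.
hard = np.eye(3)[soft.argmax(axis=-1)]

# A collapsed student emitting the same degenerate distribution every frame.
collapsed = np.array([[0.98, 0.01, 0.01],
                      [0.98, 0.01, 0.01]])
# A healthy student that tracks the teacher's per-frame argmax.
good = np.array([[0.9, 0.05, 0.05],
                 [0.05, 0.9, 0.05]])

# Hard targets punish the collapse severely on frame 2, while the collapsed
# output has very low entropy -- so an entropy bonus also discourages it.
print(cross_entropy(collapsed, hard) > cross_entropy(good, hard))  # True
print(entropy(collapsed) < entropy(soft))                          # True
```

Under soft targets alone the collapsed student is penalized much less sharply, which mirrors the abstract's hypothesis that hard labels impose a consistency the degenerate solution cannot satisfy.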
Cancer is one of the leading causes of death worldwide. It is caused by a variety of genetic mutations, which makes every instance of the disease unique. Since chemotherapy can have extremely severe side effects, each patient requires a personalized treatment plan. Finding the dosages that maximize the beneficial effects of the drugs and minimize their adverse side effects is vital. Deep neural networks automate and improve drug selection. However, they require a lot of data to be trained on. Therefore, there is a need for machine-learning approaches that require less data. Hybrid quantum neural networks were shown to provide a potential advantage in problems where training data availability is limited. We propose a novel hybrid quantum neural network for drug response prediction, based on a combination of convolutional, graph convolutional, and deep quantum neural layers of 8 qubits with 363 layers. We test our model on the reduced Genomics of Drug Sensitivity in Cancer dataset and show that the hybrid quantum model outperforms its classical analog by 15% in predicting IC50 drug effectiveness values. The proposed hybrid quantum machine learning model is a step towards deep quantum data-efficient algorithms with thousands of quantum gates for solving problems in personalized medicine, where data collection is a challenge.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.